Results 1 - 20 of 21
1.
Comput Biol Med ; 159: 106890, 2023 06.
Article in English | MEDLINE | ID: covidwho-2320334

ABSTRACT

BACKGROUND AND OBJECTIVES: The progression of pulmonary diseases is a complex process. Predicting in the early stage whether a patient will progress to the severe stage is critical for timely and appropriate hospital treatment. However, this task suffers from an "insufficient and incomplete" data issue, since it is clinically impossible to obtain adequate training samples for one patient on each day. In addition, the training samples are extremely imbalanced, since the patients who progress to the severe stage are far fewer than those who remain in the non-severe stage. METHOD: We consider the severity prediction of pulmonary diseases as a time estimation problem based on CT scans. To handle the issue of "insufficient and incomplete" training samples, we introduce label distribution learning (LDL). Specifically, we generate a label distribution for each patient, so that a CT image contributes not only to the learning of its chronological day but also to the learning of its neighboring days. In addition, a cost-sensitive mechanism is introduced to address the data imbalance issue. To identify the importance of pulmonary segments in pulmonary disease severity prediction, multi-kernel learning in a composite kernel space is further incorporated, and particle swarm optimization (PSO) is used to find the optimal kernel weights. RESULTS: We compare the performance of the proposed CS-LD-MKSVR algorithm with several classical machine learning algorithms and deep learning (DL) algorithms. The proposed method obtains the best classification results on the in-house data, indicating its effectiveness in pulmonary disease severity prediction. CONTRIBUTIONS: The severity prediction of pulmonary diseases is formulated as a time estimation problem, and label distribution is introduced to describe the conversion time from the non-severe stage to the severe stage. A cost-sensitive mechanism is also introduced to handle the data imbalance issue and further improve the classification performance.
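
The abstract does not give the exact form of the label distribution, so the following is a minimal sketch, assuming a discrete Gaussian centered on each scan's chronological day, of how a CT image could be made to supervise its neighboring days as well; the horizon of 30 days and the width sigma are illustrative assumptions.

```python
import numpy as np

def label_distribution(day, num_days=30, sigma=2.0):
    """Discrete Gaussian label distribution centered on the scan's chronological day.

    Hypothetical illustration: the paper introduces label distribution learning,
    but the exact distribution form is not given in the abstract."""
    days = np.arange(1, num_days + 1)
    weights = np.exp(-0.5 * ((days - day) / sigma) ** 2)
    return weights / weights.sum()          # normalize so the distribution sums to 1

# Example: a CT scan acquired on day 10 also supervises days 8-12 with smaller weights.
dist = label_distribution(day=10)
print(dist.argmax() + 1, dist[7:12].round(3))
```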


Subject(s)
Algorithms , Lung Diseases , Humans , Lung Diseases/diagnostic imaging , Machine Learning , Tomography, X-Ray Computed
2.
Phys Med Biol ; 2021 Feb 19.
Article in English | MEDLINE | ID: covidwho-2281116

ABSTRACT

The worldwide spread of coronavirus disease (COVID-19) has become a threatening risk for global public health. It is of great importance to rapidly and accurately distinguish patients with COVID-19 from those with community-acquired pneumonia (CAP). In this study, a total of 1658 patients with COVID-19 and 1027 CAP patients underwent thin-section CT. All images were preprocessed to obtain segmentations of the infections and lung fields. A set of handcrafted location-specific features was proposed to best capture the COVID-19 distribution pattern, in comparison to the conventional CT severity score (CT-SS) and radiomics features. An infection Size Aware Random Forest method (iSARF) was used for classification. Experimental results show that the proposed method yielded the best performance when using the handcrafted features, with a sensitivity of 91.6%, specificity of 86.8%, and accuracy of 89.8%, outperforming state-of-the-art classifiers. An additional test on 734 subjects with thick-slice images demonstrates good generalizability. It is anticipated that our proposed framework could assist clinical decision making. Furthermore, the extracted features will be made available after the review process.
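
The abstract names an infection Size Aware Random Forest (iSARF) but does not detail it. Below is an illustrative stand-in, not the authors' implementation: samples are binned by the infected fraction of the lung, one scikit-learn random forest is fit per bin, and a global forest is used as a fallback. The bin edges and toy data are assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

class SizeAwareForest:
    """Simplified size-aware random forest: one forest per infection-size bin,
    with a global forest as fallback (illustrative only, not the paper's iSARF)."""

    def __init__(self, size_bins=(0.0, 0.01, 0.05, 1.0)):
        self.size_bins = np.asarray(size_bins)

    def _bin(self, sizes):
        return np.clip(np.digitize(sizes, self.size_bins) - 1, 0, len(self.size_bins) - 2)

    def fit(self, X, y, sizes):
        self.global_model_ = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
        self.models_ = {}
        bins = self._bin(sizes)
        for b in np.unique(bins):
            idx = bins == b
            if len(np.unique(y[idx])) > 1:          # fit a per-bin forest only if both classes occur
                self.models_[b] = RandomForestClassifier(n_estimators=200, random_state=0).fit(X[idx], y[idx])
        return self

    def predict(self, X, sizes):
        bins = self._bin(sizes)
        pred = self.global_model_.predict(X)        # fallback predictions
        for b, model in self.models_.items():
            idx = bins == b
            if idx.any():
                pred[idx] = model.predict(X[idx])   # override with the bin-specific forest
        return pred

# Toy usage with synthetic features, labels, and infected-lung fractions.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 10))
sizes = rng.random(300) * 0.2
y = (X[:, 0] + 5 * sizes > 0.5).astype(int)
model = SizeAwareForest().fit(X, y, sizes)
print((model.predict(X, sizes) == y).mean())        # training accuracy on the toy data
```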

3.
IEEE J Biomed Health Inform ; 24(10): 2798-2805, 2020 10.
Article in English | MEDLINE | ID: covidwho-2282971

ABSTRACT

Chest computed tomography (CT) has become an effective tool to assist the diagnosis of coronavirus disease 2019 (COVID-19). Due to the worldwide outbreak of COVID-19, using computer-aided diagnosis techniques for COVID-19 classification based on CT images could largely alleviate the burden on clinicians. In this paper, we propose an Adaptive Feature Selection guided Deep Forest (AFS-DF) for COVID-19 classification based on chest CT images. Specifically, we first extract location-specific features from CT images. Then, to capture a high-level representation of these features with relatively small-scale data, we leverage a deep forest model. Moreover, we propose a feature selection method based on the trained deep forest model to reduce feature redundancy, where the feature selection can be adaptively incorporated into the COVID-19 classification model. We evaluated the proposed AFS-DF on a dataset with 1495 COVID-19 patients and 1027 community-acquired pneumonia (CAP) patients. The accuracy (ACC), sensitivity (SEN), specificity (SPE), AUC, precision, and F1-score achieved by our method are 91.79%, 93.05%, 89.95%, 96.35%, 93.10%, and 93.07%, respectively. Experimental results on the COVID-19 dataset suggest that the proposed AFS-DF achieves superior performance in COVID-19 vs. CAP classification compared with four widely used machine learning methods.


Subject(s)
Betacoronavirus , Clinical Laboratory Techniques/statistics & numerical data , Coronavirus Infections/diagnostic imaging , Coronavirus Infections/diagnosis , Pneumonia, Viral/diagnostic imaging , Pneumonia, Viral/diagnosis , Tomography, X-Ray Computed/statistics & numerical data , COVID-19 , COVID-19 Testing , Computational Biology , Coronavirus Infections/classification , Databases, Factual/statistics & numerical data , Deep Learning , Humans , Neural Networks, Computer , Pandemics/classification , Pneumonia, Viral/classification , Radiographic Image Interpretation, Computer-Assisted/statistics & numerical data , Radiography, Thoracic/statistics & numerical data , SARS-CoV-2
4.
IEEE Trans Med Imaging ; PP, 2022 Dec 02.
Article in English | MEDLINE | ID: covidwho-2232644

ABSTRACT

With the rapid worldwide spread of Coronavirus Disease 2019 (COVID-19), jointly identifying severe COVID-19 cases from mild ones and predicting the conversion time (from mild to severe) is essential to optimize the workflow and reduce the clinicians' workload. In this study, we propose a novel framework for COVID-19 diagnosis, termed Structural Attention Graph Neural Network (SAGNN), which combines multi-source information, including features extracted from chest CT, the latent lung structural distribution, and non-imaging patient information, to diagnose COVID-19 severity and predict the conversion time from mild to severe. Specifically, we first construct a graph to incorporate structural information of the lung and adopt a graph attention network to iteratively update the representations of lung segments. To distinguish the different infection degrees of the left and right lungs, we further introduce a structural attention mechanism. Finally, we introduce demographic information and develop a multi-task learning framework to jointly perform classification and regression. Experiments are conducted on a real dataset with 1687 chest CT scans, including 1328 mild cases and 359 severe cases. Experimental results show that our method achieves the best classification (e.g., 86.86% in terms of Area Under the Curve) and regression (e.g., 0.58 in terms of Correlation Coefficient) performance compared with other methods.
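
As a rough illustration of the graph-attention update over lung segments described above, here is a simplified single-head layer in NumPy; it is not the authors' SAGNN, and the graph size, feature dimensions, and random parameters are assumptions.

```python
import numpy as np

def leaky_relu(x, alpha=0.2):
    return np.where(x > 0, x, alpha * x)

def gat_layer(H, A, W, a):
    """One simplified single-head graph-attention update over lung segments.

    H: (n, d_in) segment features; A: (n, n) adjacency with self-loops;
    W: (d_in, d_out) projection; a: (2 * d_out,) attention vector.
    Shapes and parameters are illustrative assumptions."""
    Z = H @ W                                            # project segment features
    d_out = Z.shape[1]
    left, right = Z @ a[:d_out], Z @ a[d_out:]           # split attention vector
    e = leaky_relu(left[:, None] + right[None, :])       # pairwise attention logits
    e = np.where(A > 0, e, -1e9)                         # only attend to graph neighbors
    alpha = np.exp(e - e.max(axis=1, keepdims=True))
    alpha /= alpha.sum(axis=1, keepdims=True)            # row-wise softmax
    return alpha @ Z                                     # updated segment representations

rng = np.random.default_rng(0)
n_seg, d_in, d_out = 18, 16, 8                           # e.g., 18 bronchopulmonary segments (assumed)
H = rng.normal(size=(n_seg, d_in))
A = (rng.random((n_seg, n_seg)) < 0.2).astype(float)
A = np.maximum(A, A.T) + np.eye(n_seg)                   # symmetric adjacency with self-loops
W, a = rng.normal(size=(d_in, d_out)), rng.normal(size=2 * d_out)
print(gat_layer(H, A, W, a).shape)                       # (18, 8)
```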

5.
Annu Rev Biomed Eng ; 24: 179-201, 2022 06 06.
Article in English | MEDLINE | ID: covidwho-1752919

ABSTRACT

The coronavirus disease 2019 (COVID-19) pandemic has imposed dramatic challenges on health-care organizations worldwide. To combat the global crisis, thoracic imaging has played a major role in the diagnosis, prediction, and management of COVID-19 patients with moderate to severe symptoms or with evidence of worsening respiratory status. In response, the medical image analysis community acted quickly to develop and disseminate deep learning models and tools to meet the urgent need to manage and interpret large amounts of COVID-19 imaging data. This review aims not only to summarize existing deep learning and medical image analysis methods but also to offer in-depth discussions and recommendations for future investigations. We believe that the wide availability of high-quality, curated, and benchmarked COVID-19 imaging data sets offers great promise as a transformative test bed to develop, validate, and disseminate novel deep learning methods at the frontiers of data science and artificial intelligence.


Subject(s)
COVID-19 , Deep Learning , Artificial Intelligence , COVID-19 Testing , Humans , SARS-CoV-2
6.
J Biomed Inform ; 127: 103999, 2022 03.
Article in English | MEDLINE | ID: covidwho-1654687

ABSTRACT

The coronavirus disease (COVID-19) has claimed the lives of over 350,000 people and infected more than 173 million people worldwide, prompting researchers from diverse fields to accelerate their work on diagnostics, therapies, and vaccines. Researchers also publish their recent research progress through scientific papers. However, manually writing the abstract of a paper is time-consuming and increases the writing burden on researchers. Abstractive summarization techniques, which automatically provide researchers with reliable draft abstracts, can alleviate this problem. In this work, we propose a linguistically enriched SciBERT-based summarization model for COVID-19 scientific papers, named COVIDSum. Specifically, we first extract salient sentences from source papers and construct word co-occurrence graphs. Then, we adopt a SciBERT-based sequence encoder and a Graph Attention Network-based graph encoder to encode the sentences and the word co-occurrence graphs, respectively. Finally, we fuse the two encodings and generate an abstractive summary of each scientific paper. When evaluated on the publicly available COVID-19 open research dataset, our proposed model achieves a significant improvement over other document summarization models.
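
A small sketch of how a word co-occurrence graph might be built from the extracted sentences: sliding-window co-occurrence counts over naive whitespace tokens. The window size and tokenization are assumptions, not details from the paper.

```python
from collections import defaultdict

def cooccurrence_graph(sentences, window=3):
    """Build an undirected word co-occurrence graph: the edge weight is the number
    of times two words appear within `window` tokens of each other."""
    edges = defaultdict(int)
    for sent in sentences:
        tokens = sent.lower().split()            # naive whitespace tokenization (assumed)
        for i in range(len(tokens)):
            for j in range(i + 1, min(i + window, len(tokens))):
                if tokens[i] != tokens[j]:
                    edges[tuple(sorted((tokens[i], tokens[j])))] += 1
    return dict(edges)

graph = cooccurrence_graph([
    "covid-19 vaccines reduce severe covid-19 cases",
    "severe cases require intensive care",
])
print(graph[("cases", "severe")])                # 2: the pair co-occurs in both sentences
```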


Subject(s)
COVID-19 , Humans , Language , Publishing , SARS-CoV-2
7.
IEEE Trans Med Imaging ; 41(1): 88-102, 2022 01.
Article in English | MEDLINE | ID: covidwho-1593541

ABSTRACT

Early and accurate severity assessment of Coronavirus Disease 2019 (COVID-19) based on computed tomography (CT) images greatly helps estimate intensive care unit events and supports clinical decisions on treatment planning. To augment the labeled data and improve the generalization ability of the classification model, it is necessary to aggregate data from multiple sites. This task faces several challenges, including class imbalance between mild and severe infections, domain distribution discrepancy between sites, and the presence of heterogeneous features. In this paper, we propose a novel domain adaptation (DA) method with two components to address these problems. The first component is a stochastic class-balanced boosting sampling strategy that overcomes the imbalanced learning problem and improves classification performance on poorly predicted classes. The second component is a representation learning scheme that guarantees three properties: 1) domain transferability via a prototype triplet loss, 2) discriminability via a conditional maximum mean discrepancy loss, and 3) completeness via a multi-view reconstruction loss. In particular, we propose a domain translator and align the heterogeneous data to the estimated class prototypes (i.e., class centers) on a hypersphere manifold. Experiments on cross-site severity assessment of COVID-19 from CT images show that the proposed method can effectively tackle the imbalanced learning problem and outperforms recent DA approaches.
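
The conditional MMD loss used in the paper is not spelled out in the abstract; as a reference point, the plain (unconditional) RBF-kernel maximum mean discrepancy between two sites' feature sets can be estimated as below. The kernel bandwidth and synthetic site data are assumptions.

```python
import numpy as np

def rbf_mmd2(X, Y, gamma=1.0):
    """Biased estimate of the squared maximum mean discrepancy between samples
    X (n, d) and Y (m, d) under an RBF kernel k(x, y) = exp(-gamma * ||x - y||^2)."""
    def k(A, B):
        d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
        return np.exp(-gamma * d2)
    return k(X, X).mean() + k(Y, Y).mean() - 2 * k(X, Y).mean()

rng = np.random.default_rng(0)
site_a = rng.normal(0.0, 1.0, size=(100, 16))
site_b = rng.normal(0.5, 1.0, size=(100, 16))            # shifted feature distribution
print(rbf_mmd2(site_a, site_a) <= rbf_mmd2(site_a, site_b))  # True: the domain shift increases MMD
```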


Subject(s)
COVID-19 , Humans , SARS-CoV-2 , Tomography, X-Ray Computed
8.
BMC Med Imaging ; 21(1): 154, 2021 10 21.
Article in English | MEDLINE | ID: covidwho-1546762

ABSTRACT

BACKGROUND: The outbreak of coronavirus disease 2019 (COVID-19) has caused tens of millions of infections worldwide. Many machine learning methods have been proposed for the computer-aided diagnosis of COVID-19 versus community-acquired pneumonia (CAP) from chest computed tomography (CT) images. Most of these methods utilize location-specific handcrafted features based on segmentation results to improve diagnostic performance. However, the prerequisite segmentation step is time-consuming and requires intervention by many expert radiologists, which cannot be achieved in areas with limited medical resources. METHODS: We propose a generative adversarial feature completion and diagnosis network (GACDN) that simultaneously generates handcrafted features from their radiomic counterparts and makes accurate diagnoses based on both the original and generated features. Specifically, we first calculate radiomic features from the CT images. Then, to quickly obtain the location-specific handcrafted features, we use the proposed GACDN to generate them from the corresponding radiomic features. Finally, we use both the radiomic features and the location-specific handcrafted features for COVID-19 diagnosis. RESULTS: Regarding the performance of the generated location-specific handcrafted features, results with four basic classifiers show an average increase of 3.21% in diagnostic accuracy. In addition, the experimental results on the COVID-19 dataset show that our proposed method achieves superior performance in COVID-19 vs. community-acquired pneumonia (CAP) classification compared with state-of-the-art methods. CONCLUSIONS: The proposed method significantly improves the diagnostic accuracy of COVID-19 vs. CAP classification when location-specific handcrafted features are incomplete. It is also applicable in regions lacking expert radiologists and high-performance computing resources.


Subject(s)
COVID-19/diagnosis , Deep Learning , Diagnosis, Computer-Assisted/methods , Machine Learning , SARS-CoV-2 , Tomography, X-Ray Computed/methods , COVID-19/epidemiology , Humans
9.
IEEE Rev Biomed Eng ; 14: 4-15, 2021.
Article in English | MEDLINE | ID: covidwho-1501333

ABSTRACT

The pandemic of coronavirus disease 2019 (COVID-19) is spreading all over the world. Medical imaging such as X-ray and computed tomography (CT) plays an essential role in the global fight against COVID-19, while recently emerging artificial intelligence (AI) technologies further strengthen the power of the imaging tools and help medical specialists. We hereby review the rapid responses of the medical imaging community, empowered by AI, to COVID-19. For example, AI-empowered image acquisition can significantly help automate the scanning procedure and reshape the workflow with minimal contact with patients, providing the best protection to imaging technicians. AI can also improve work efficiency through accurate delineation of infections in X-ray and CT images, facilitating subsequent quantification. Moreover, computer-aided platforms help radiologists make clinical decisions, i.e., for disease diagnosis, tracking, and prognosis. In this review paper, we thus cover the entire pipeline of medical imaging and analysis techniques involved with COVID-19, including image acquisition, segmentation, diagnosis, and follow-up. We particularly focus on the integration of AI with X-ray and CT, both of which are widely used in frontline hospitals, to depict the latest progress of medical imaging and radiology in the fight against COVID-19.


Subject(s)
COVID-19/diagnosis , SARS-CoV-2/pathogenicity , Artificial Intelligence , Humans , Pandemics/prevention & control , Tomography, X-Ray Computed/methods
10.
Pattern Recognit ; 122: 108341, 2022 Feb.
Article in English | MEDLINE | ID: covidwho-1415697

ABSTRACT

Segmentation of infections from CT scans is important for accurate diagnosis and follow-up in tackling COVID-19. Although convolutional neural networks have great potential to automate the segmentation task, most existing deep learning-based infection segmentation methods require fully annotated ground-truth labels for training, which is time-consuming and labor-intensive. This paper proposes a novel weakly supervised segmentation method for COVID-19 infections in CT slices that only requires scribble supervision and is enhanced with uncertainty-aware self-ensembling and transformation-consistent techniques. Specifically, to deal with the difficulty caused by the shortage of supervision, an uncertainty-aware mean teacher is incorporated into the scribble-based segmentation method, encouraging the segmentation predictions to be consistent under different perturbations of an input image. This mean teacher model can guide the student model to be trained using the information in images without requiring manual annotations. On the other hand, since the output of the mean teacher contains both correct and unreliable predictions, treating each prediction of the teacher model equally may degrade the performance of the student network. To alleviate this problem, a pixel-level uncertainty measure on the predictions of the teacher model is calculated, and the student model is guided only by the reliable predictions from the teacher model. To further regularize the network, a transformation-consistent strategy is also incorporated, which requires the prediction to follow the same transformation when a transform is applied to an input image. The proposed method has been evaluated on two public datasets and one local dataset. The experimental results demonstrate that the proposed method is more effective than other weakly supervised methods and achieves performance similar to that of fully supervised ones.
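
A compact sketch of two ingredients named above: the exponential-moving-average (mean teacher) update and entropy-based masking of unreliable teacher predictions. The decay, entropy threshold, and array shapes are illustrative assumptions rather than the paper's settings.

```python
import numpy as np

def ema_update(teacher_params, student_params, decay=0.99):
    """Mean-teacher update: teacher weights are an exponential moving average
    of the student weights."""
    return [decay * t + (1.0 - decay) * s for t, s in zip(teacher_params, student_params)]

def reliable_mask(teacher_probs, threshold=0.3):
    """Keep only pixels where the teacher's predictive entropy is low.

    teacher_probs: (H, W, C) softmax probabilities from the teacher model.
    Returns a boolean (H, W) mask of 'reliable' pixels used to guide the student."""
    entropy = -np.sum(teacher_probs * np.log(teacher_probs + 1e-8), axis=-1)
    return entropy < threshold

rng = np.random.default_rng(0)
probs = rng.dirichlet(alpha=[2.0, 2.0], size=(4, 4))     # toy 2-class teacher output
print(reliable_mask(probs).sum(), "reliable pixels out of", probs.shape[0] * probs.shape[1])
```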

11.
NPJ Digit Med ; 4(1): 124, 2021 Aug 16.
Article in English | MEDLINE | ID: covidwho-1360212

ABSTRACT

Most prior studies have focused on developing models for the severity or mortality prediction of COVID-19 patients. However, effective models for recovery-time prediction are still lacking. Here, we present a deep learning solution named iCOVID that can successfully predict the recovery time of COVID-19 patients based on predefined treatment schemes and heterogeneous multimodal patient information collected within 48 hours after admission. In addition, an interpretable mechanism termed FSR is integrated into iCOVID to reveal the features that most strongly affect each patient's prediction. Data from a total of 3008 patients were collected from three hospitals in Wuhan, China, for large-scale verification. The experiments demonstrate that iCOVID achieves a time-dependent concordance index of 74.9% (95% CI: 73.6-76.3%) and an average day error of 4.4 days (95% CI: 4.2-4.6 days). Our study reveals that treatment schemes, age, symptoms, comorbidities, and biomarkers are highly related to recovery-time predictions.
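
The paper reports a time-dependent concordance index; as a simpler, related sanity check, Harrell's concordance index for predicted recovery times can be computed as below. The censoring handling is simplified and the toy numbers are assumptions of this sketch, not the study's data.

```python
import numpy as np

def harrell_c_index(times, predicted, observed):
    """Harrell's C: the fraction of comparable patient pairs whose predicted
    recovery-time ordering matches the observed ordering.

    times: observed recovery (or censoring) time; predicted: model-predicted time;
    observed: 1 if recovery was observed, 0 if censored."""
    concordant, comparable = 0.0, 0.0
    n = len(times)
    for i in range(n):
        for j in range(n):
            if observed[i] == 1 and times[i] < times[j]:     # pair is comparable
                comparable += 1
                if predicted[i] < predicted[j]:
                    concordant += 1
                elif predicted[i] == predicted[j]:
                    concordant += 0.5
    return concordant / comparable

times = np.array([10, 14, 21, 9])
pred = np.array([11, 15, 18, 10])
event = np.array([1, 1, 1, 1])                   # all recoveries observed in this toy example
print(round(harrell_c_index(times, pred, event), 3))   # 1.0: all pairwise orderings agree
```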

12.
BMC Med Imaging ; 21(1): 57, 2021 03 23.
Article in English | MEDLINE | ID: covidwho-1148211

ABSTRACT

BACKGROUND: The spatial and temporal lung infection distributions of coronavirus disease 2019 (COVID-19), and their changes, could reveal important patterns that improve understanding of the disease and its time course. This paper presents a pipeline to analyze these patterns statistically by automatically segmenting the infection regions and registering them onto a common template. METHODS: A VB-Net is designed to automatically segment infection regions in CT images. After training and validating the model, we segmented all the CT images in the study. The segmentation results were then warped onto a pre-defined template CT image using deformable registration based on the lung fields. The spatial distributions of infection regions, and their changes during the course of the disease, were then calculated at the voxel level. Visualization and quantitative comparison can be performed between different groups. We compared the distribution maps between COVID-19 and community-acquired pneumonia (CAP), between severe and critical COVID-19, and across the time course of the disease. RESULTS: For infection segmentation, comparing the segmentation results with the manually annotated ground truth, the average Dice is 91.6% ± 10.0%, which is close to the inter-rater difference between two radiologists (Dice of 96.1% ± 3.5%). The distribution map of infection regions shows that high-probability regions lie in the peripheral subpleural area (up to 35.1% in probability). COVID-19 GGO lesions are more widely spread than consolidations, and the latter are located more peripherally. Onset images of severe COVID-19 (inpatients) show similar lesion distributions, but with smaller areas of significant difference in the right lower lobe, compared to critical COVID-19 (intensive care unit patients). Regarding the disease course, critical COVID-19 patients showed four subsequent patterns (progression, absorption, enlargement, and further absorption) in our collected dataset, with remarkable concurrent HU patterns for GGO and consolidations. CONCLUSIONS: By segmenting the infection regions with a VB-Net and registering all the CT images and segmentation results onto a template, spatial distribution patterns of infections can be computed automatically. The algorithm provides an effective tool to visualize and quantify the spatial patterns of lung infections and their changes during the disease course. Our results demonstrate different patterns between COVID-19 and CAP, between severe and critical COVID-19, as well as four subsequent disease-course patterns in the severe COVID-19 patients studied, with remarkable concurrent HU patterns for GGO and consolidations.
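
Once all segmented infection masks have been warped onto the common template, the voxel-level distribution map described above reduces to an average of the aligned binary masks. A minimal sketch follows; the array shapes and toy group data are illustrative assumptions.

```python
import numpy as np

def infection_probability_map(registered_masks):
    """Voxel-wise infection probability for a patient group.

    registered_masks: (n_patients, D, H, W) binary infection masks that have
    already been warped onto the common template space."""
    masks = np.asarray(registered_masks, dtype=float)
    return masks.mean(axis=0)          # probability of infection at each template voxel

rng = np.random.default_rng(0)
group_a = rng.random((20, 8, 16, 16)) < 0.1    # toy "COVID-19" masks in template space
group_b = rng.random((20, 8, 16, 16)) < 0.2    # toy "CAP" masks in template space
diff = infection_probability_map(group_a) - infection_probability_map(group_b)
print(diff.shape, float(diff.mean().round(3)))  # per-voxel probability difference between groups
```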


Subject(s)
COVID-19/diagnostic imaging , Community-Acquired Infections/diagnostic imaging , Radiographic Image Interpretation, Computer-Assisted/methods , Algorithms , Disease Progression , Humans , Pneumonia/diagnostic imaging , Tomography, X-Ray Computed/methods
13.
Med Image Anal ; 69: 101978, 2021 04.
Article in English | MEDLINE | ID: covidwho-1062515

ABSTRACT

How to quickly and accurately assess the severity level of COVID-19 is an essential problem when millions of people are suffering from the pandemic around the world. Currently, chest CT is regarded as a popular and informative imaging tool for COVID-19 diagnosis. However, we observe two issues, weak annotation and insufficient data, that may obstruct automatic COVID-19 severity assessment with CT images. To address these challenges, we propose a novel three-component method, i.e., 1) a deep multiple instance learning component with instance-level attention to jointly classify the bag and weigh the instances, 2) a bag-level data augmentation component to generate virtual bags by reorganizing high-confidence instances, and 3) a self-supervised pretext component to aid the learning process. We have systematically evaluated our method on the CT images of 229 COVID-19 cases, including 50 severe and 179 non-severe cases. Our method obtains an average accuracy of 95.8%, with 93.6% sensitivity and 96.4% specificity, which outperforms previous works.
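
A bare-bones NumPy sketch of instance-level attention pooling as used in attention-based multiple instance learning: each CT patch (instance) embedding receives a learned attention weight, and the bag representation is their weighted sum. The parameters here are random stand-ins, not the paper's trained network.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_mil_pool(instances, V, w):
    """Attention-based MIL pooling for one bag.

    instances: (n_instances, d) patch embeddings forming one bag;
    V: (d, h) and w: (h,) are attention parameters (random stand-ins here)."""
    scores = np.tanh(instances @ V) @ w          # one attention score per instance
    alpha = softmax(scores)                      # instance weights sum to 1
    return alpha @ instances, alpha              # bag embedding and per-instance weights

rng = np.random.default_rng(0)
bag = rng.normal(size=(12, 32))                  # 12 patches from one CT scan (toy)
V, w = rng.normal(size=(32, 16)), rng.normal(size=16)
bag_embedding, weights = attention_mil_pool(bag, V, w)
print(bag_embedding.shape, weights.argmax())     # (32,), index of the most attended patch
```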


Subject(s)
COVID-19/diagnostic imaging , Adolescent , Adult , Aged , Aged, 80 and over , Child , Child, Preschool , Deep Learning , Female , Humans , Infant , Infant, Newborn , Male , Middle Aged , SARS-CoV-2 , Severity of Illness Index , Supervised Machine Learning , Tomography, X-Ray Computed , Young Adult
14.
Int J Infect Dis ; 102: 316-318, 2021 Jan.
Article in English | MEDLINE | ID: covidwho-1060468

ABSTRACT

The ongoing worldwide COVID-19 pandemic has become a huge threat to global public health. Using CT images, 3389 COVID-19 patients, 1593 community-acquired pneumonia (CAP) patients, and 1707 non-pneumonia subjects were included to explore the different patterns of the lung and lung infection. We found that COVID-19 patients have significantly reduced lung volume with increased density and mass, and that the infections tend to present in the bilateral lower lobes. These findings provide imaging evidence to improve our understanding of COVID-19.
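
The lung volume, density, and mass measurements referenced above can be derived directly from the CT Hounsfield units inside a lung mask, using the standard approximation that physical density is roughly 1 + HU/1000 g/mL. The voxel spacing and toy volume below are assumed examples, not the study's data.

```python
import numpy as np

def lung_volume_density_mass(hu, lung_mask, voxel_spacing_mm=(1.0, 0.7, 0.7)):
    """Compute lung volume (mL), mean density (g/mL), and mass (g) from a CT volume.

    Uses the common approximation density ~= 1 + HU/1000 g/mL (HU = -1000 for air,
    0 for water). hu and lung_mask are (D, H, W) arrays."""
    voxel_ml = np.prod(voxel_spacing_mm) / 1000.0          # mm^3 -> mL
    hu_lung = hu[lung_mask > 0]
    volume_ml = hu_lung.size * voxel_ml
    density = np.clip(1.0 + hu_lung / 1000.0, 0.0, None)   # g/mL, clipped at 0 for HU < -1000
    mass_g = density.sum() * voxel_ml
    return volume_ml, density.mean(), mass_g

# Toy volume: aerated lung around -850 HU inside a cuboid mask.
hu = np.full((50, 100, 100), -850.0)
mask = np.zeros_like(hu, dtype=bool)
mask[10:40, 20:80, 20:80] = True
print(lung_volume_density_mass(hu, mask))
```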


Subject(s)
COVID-19/diagnostic imaging , Lung/physiopathology , Big Data , COVID-19/physiopathology , COVID-19/virology , Community-Acquired Infections/diagnostic imaging , Community-Acquired Infections/physiopathology , Community-Acquired Infections/virology , Female , Humans , Lung/diagnostic imaging , Lung/virology , Male , Middle Aged , Pandemics , Respiratory Function Tests , Retrospective Studies , SARS-CoV-2/physiology , Tomography, X-Ray Computed/methods
15.
Pattern Recognit ; 113: 107828, 2021 May.
Article in English | MEDLINE | ID: covidwho-1033799

ABSTRACT

Understanding chest CT imaging of the coronavirus disease 2019 (COVID-19) helps detect infections early and assess disease progression. In particular, automated severity assessment of COVID-19 from CT images plays an essential role in identifying cases that are in great need of intensive clinical care. However, it is often challenging to accurately assess the severity of this disease from CT images, due to variable infection regions in the lungs, similar imaging biomarkers, and large inter-case variations. To this end, we propose a synergistic learning framework for automated severity assessment of COVID-19 in 3D CT images by jointly performing lung lobe segmentation and multi-instance classification. Considering that only a few infection regions in a CT image are related to severity assessment, we first represent each input image by a bag that contains a set of 2D image patches (each cropped from a specific slice). A multi-task multi-instance deep network (called M2UNet) is then developed to assess the severity of COVID-19 patients and simultaneously segment the lung lobes. Our M2UNet consists of a patch-level encoder, a segmentation sub-network for lung lobe segmentation, and a classification sub-network for severity assessment (with a unique hierarchical multi-instance learning strategy). Here, the context information provided by segmentation can be implicitly employed to improve the performance of severity assessment. Extensive experiments were performed on a real COVID-19 CT image dataset consisting of 666 chest CT images, with results suggesting the effectiveness of our proposed method compared with several state-of-the-art methods.

16.
Med Image Anal ; 68: 101910, 2021 02.
Article in English | MEDLINE | ID: covidwho-943426

ABSTRACT

The coronavirus disease, named COVID-19, has become the largest global public health crisis since it started in early 2020. CT imaging has been used as a complementary tool to assist early screening, especially for the rapid identification of COVID-19 cases from community acquired pneumonia (CAP) cases. The main challenge in early screening is how to model the confusing cases in the COVID-19 and CAP groups, with very similar clinical manifestations and imaging features. To tackle this challenge, we propose an Uncertainty Vertex-weighted Hypergraph Learning (UVHL) method to identify COVID-19 from CAP using CT images. In particular, multiple types of features (including regional features and radiomics features) are first extracted from CT image for each case. Then, the relationship among different cases is formulated by a hypergraph structure, with each case represented as a vertex in the hypergraph. The uncertainty of each vertex is further computed with an uncertainty score measurement and used as a weight in the hypergraph. Finally, a learning process of the vertex-weighted hypergraph is used to predict whether a new testing case belongs to COVID-19 or not. Experiments on a large multi-center pneumonia dataset, consisting of 2148 COVID-19 cases and 1182 CAP cases from five hospitals, are conducted to evaluate the prediction accuracy of the proposed method. Results demonstrate the effectiveness and robustness of our proposed method on the identification of COVID-19 in comparison to state-of-the-art methods.
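
One common way to realize the hypergraph structure described above is a k-nearest-neighbour construction, where each case spawns a hyperedge connecting it with its k most similar cases in feature space. The sketch below shows only that construction; the paper's uncertainty-based vertex weighting and the toy features are not reproduced and the value of k is an assumption.

```python
import numpy as np

def knn_hypergraph_incidence(features, k=3):
    """Incidence matrix H (n_vertices, n_hyperedges): hyperedge j connects case j
    with its k nearest neighbours in feature space."""
    d2 = np.sum(features**2, 1)[:, None] + np.sum(features**2, 1)[None, :] - 2 * features @ features.T
    n = len(features)
    H = np.zeros((n, n))
    for j in range(n):
        neigh = np.argsort(d2[j])[: k + 1]       # the case itself plus its k nearest neighbours
        H[neigh, j] = 1.0
    return H

rng = np.random.default_rng(0)
X = rng.normal(size=(10, 5))                     # toy regional + radiomics features per case
H = knn_hypergraph_incidence(X)
print(H.shape, H.sum(axis=0))                    # each hyperedge contains k + 1 = 4 vertices
```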


Subject(s)
COVID-19/diagnostic imaging , Community-Acquired Infections/diagnostic imaging , Diagnosis, Computer-Assisted/methods , Machine Learning , Pneumonia, Viral/diagnostic imaging , Radiographic Image Interpretation, Computer-Assisted/methods , Tomography, X-Ray Computed , China , Community-Acquired Infections/virology , Datasets as Topic , Diagnosis, Differential , Humans , Pneumonia, Viral/virology , SARS-CoV-2
17.
Med Phys ; 48(4): 1633-1645, 2021 Apr.
Article in English | MEDLINE | ID: covidwho-938495

ABSTRACT

OBJECTIVE: Computed tomography (CT) provides rich diagnostic and severity information on COVID-19 in clinical practice. However, there is no computerized tool to automatically delineate COVID-19 infection regions in chest CT scans for quantitative assessment in advanced applications such as severity prediction. The aim of this study was to develop a deep learning (DL)-based method for automatic segmentation and quantification of infection regions, as well as the entire lungs, from chest CT scans. METHODS: The DL-based segmentation method employs the "VB-Net" neural network to segment COVID-19 infection regions in CT scans. The system is trained on CT scans from 249 COVID-19 patients and further validated on CT scans from another 300 COVID-19 patients. To accelerate the manual delineation of CT scans for training, a human-involved-model-iterations (HIMI) strategy is adopted to assist radiologists in refining the automatic annotation of each training case. To evaluate the performance of the DL-based segmentation system, three metrics, i.e., the Dice similarity coefficient, the volume difference, and the percentage of infection (POI), are calculated between automatic and manual segmentations on the validation set. A clinical study on severity prediction is then reported based on the quantitative infection assessment. RESULTS: The proposed DL-based segmentation system yielded Dice similarity coefficients of 91.6% ± 10.0% between automatic and manual segmentations, and a mean POI estimation error of 0.3% for the whole lung on the validation dataset. Moreover, compared with fully manual delineation, which often takes hours per case, the proposed HIMI training strategy dramatically reduces the delineation time to 4 min after three iterations of model updating. In addition, the best accuracy of severity prediction was 73.4% ± 1.3% when the mass of infection (MOI) of multiple lung lobes and bronchopulmonary segments was used as the feature set, indicating the potential clinical application of our quantification technique for severity prediction. CONCLUSIONS: A DL-based segmentation system has been developed to automatically segment and quantify infection regions in CT scans of COVID-19 patients. Quantitative evaluation indicated high accuracy in both automatic infection delineation and severity prediction.
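
The two quantitative outputs described above, the Dice similarity coefficient between automatic and manual masks and the percentage of infection (POI) of the lung, are straightforward to compute from binary masks; a minimal sketch with toy masks (the shapes and noise levels are assumptions):

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice similarity coefficient between two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum() + 1e-8)

def percentage_of_infection(infection_mask, lung_mask):
    """POI: infected volume as a percentage of the whole-lung volume."""
    return 100.0 * infection_mask.astype(bool).sum() / lung_mask.astype(bool).sum()

rng = np.random.default_rng(0)
lung = np.ones((32, 64, 64), dtype=bool)
truth = rng.random(lung.shape) < 0.05
pred = truth.copy()
pred[:2] = False                                 # simulated small under-segmentation
print(round(dice_coefficient(pred, truth), 3), round(percentage_of_infection(pred, lung), 2))
```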


Subject(s)
COVID-19/diagnostic imaging , Deep Learning , Image Interpretation, Computer-Assisted , Lung/diagnostic imaging , Tomography, X-Ray Computed , Humans
18.
Med Image Anal ; 67: 101824, 2021 01.
Article in English | MEDLINE | ID: covidwho-888729

ABSTRACT

With the rapid worldwide spread of Coronavirus disease (COVID-19), it is of great importance to conduct early diagnosis of COVID-19 and predict the time at which patients may convert to the severe stage, for designing effective treatment plans and reducing the clinicians' workload. In this study, we propose a joint classification and regression method to determine whether a patient will develop severe symptoms at a later time (formulated as a classification task) and, if so, to predict the conversion time (formulated as a regression task). To do this, the proposed method takes into account 1) a weight for each sample, to reduce the influence of outliers and address the problem of imbalanced classification, and 2) a weight for each feature via a sparsity regularization term, to remove the redundant features of the high-dimensional data and learn the information shared across the two tasks, i.e., the classification and the regression. To our knowledge, this study is the first to jointly predict disease progression and the conversion time, which could help clinicians deal with potentially severe cases in time or even save patients' lives. Experimental analysis was conducted on a real dataset from two hospitals with 408 chest computed tomography (CT) scans. Results show that our method achieves the best classification (e.g., 85.91% accuracy) and regression (e.g., a correlation coefficient of 0.462) performance, compared with all comparison methods. Moreover, our proposed method yields 76.97% accuracy for predicting severe cases, a correlation coefficient of 0.524, and a difference of 0.55 days for the conversion time.


Subject(s)
COVID-19/classification , COVID-19/diagnostic imaging , Pneumonia, Viral/classification , Pneumonia, Viral/diagnostic imaging , Tomography, X-Ray Computed/methods , Disease Progression , Female , Humans , Male , Middle Aged , Predictive Value of Tests , Radiographic Image Interpretation, Computer-Assisted , Radiography, Thoracic , SARS-CoV-2 , Severity of Illness Index , Time Factors
19.
Phys Med Biol ; 66(3): 035015, 2021 01 26.
Article in English | MEDLINE | ID: covidwho-842038

ABSTRACT

The coronavirus disease 2019 (COVID-19) is now a global pandemic. Tens of millions of people have confirmed infections, and many more are suspected cases. Chest computed tomography (CT) is recognized as an important tool for COVID-19 severity assessment. As the number of chest CT images increases rapidly, manual severity assessment becomes a labor-intensive task that delays appropriate isolation and treatment. In this paper, a study of automatic severity assessment for COVID-19 is presented. Specifically, chest CT images of 118 patients (age 46.5 ± 16.5 years, 64 male and 54 female) with confirmed COVID-19 infection are used, from which 63 quantitative features and 110 radiomics features are derived. Besides the chest CT image features, 36 laboratory indices of each patient are also used, providing complementary information from a different view. A random forest (RF) model is trained to assess the severity (non-severe or severe) according to the chest CT image features and laboratory indices. The importance of each chest CT image feature and laboratory index, which reflects its correlation with COVID-19 severity, is also calculated from the RF model. Using three-fold cross-validation, the RF model shows promising results: 0.910 (true positive rate), 0.858 (true negative rate), and 0.890 (accuracy), along with an AUC of 0.98. Moreover, several chest CT image features and laboratory indices are found to be highly related to COVID-19 severity, which could be valuable for the clinical diagnosis of COVID-19.
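
An illustrative scikit-learn sketch of the modelling setup described above; synthetic features stand in for the 63 quantitative, 110 radiomics, and 36 laboratory features, and this is not the authors' code or data.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score

# Synthetic stand-in: 118 patients, 63 + 110 CT features and 36 laboratory indices.
X, y = make_classification(n_samples=118, n_features=63 + 110 + 36, n_informative=20,
                           weights=[0.55, 0.45], random_state=0)

rf = RandomForestClassifier(n_estimators=300, random_state=0)
cv = StratifiedKFold(n_splits=3, shuffle=True, random_state=0)
print("3-fold accuracy:", cross_val_score(rf, X, y, cv=cv, scoring="accuracy").mean().round(3))

# Feature importances as a proxy for each feature's association with severity.
rf.fit(X, y)
top = np.argsort(rf.feature_importances_)[::-1][:5]
print("most important feature indices:", top)
```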


Subject(s)
COVID-19/diagnostic imaging , Radiography, Thoracic , Tomography, X-Ray Computed , Adult , Area Under Curve , False Positive Reactions , Female , Humans , Laboratories , Lung/diagnostic imaging , Male , Middle Aged , Pandemics , Retrospective Studies , Severity of Illness Index
20.
IEEE Trans Med Imaging ; 39(8): 2595-2605, 2020 Aug.
Article in English | MEDLINE | ID: covidwho-690930

ABSTRACT

The coronavirus disease (COVID-19) is rapidly spreading all over the world and has infected more than 1,436,000 people in more than 200 countries and territories as of April 9, 2020. Detecting COVID-19 at an early stage is essential to deliver proper healthcare to patients and to protect the uninfected population. To this end, we develop a dual-sampling attention network to automatically differentiate COVID-19 from community-acquired pneumonia (CAP) in chest computed tomography (CT). In particular, we propose a novel online attention module with a 3D convolutional neural network (CNN) to focus on the infection regions in the lungs when making diagnostic decisions. Note that the distribution of infection-region sizes is imbalanced between COVID-19 and CAP, partially due to the fast progression of COVID-19 after symptom onset. Therefore, we develop a dual-sampling strategy to mitigate the imbalanced learning. Our method is evaluated on (to the best of our knowledge) the largest multi-center CT dataset for COVID-19, collected from 8 hospitals. In the training-validation stage, we collect 2186 CT scans from 1588 patients for 5-fold cross-validation. In the testing stage, we employ another independent large-scale testing dataset including 2796 CT scans from 2057 patients. Results show that our algorithm can identify COVID-19 images with an area under the receiver operating characteristic curve (AUC) of 0.944, accuracy of 87.5%, sensitivity of 86.9%, specificity of 90.1%, and F1-score of 82.0%. With this performance, the proposed algorithm could potentially aid radiologists in COVID-19 diagnosis from CAP, especially in the early stage of the COVID-19 outbreak.


Subject(s)
Coronavirus Infections/diagnostic imaging , Deep Learning , Image Interpretation, Computer-Assisted/methods , Pneumonia, Viral/diagnostic imaging , Algorithms , Betacoronavirus , COVID-19 , Community-Acquired Infections/diagnostic imaging , Humans , Pandemics , ROC Curve , Radiography, Thoracic , SARS-CoV-2 , Tomography, X-Ray Computed